47 research outputs found
Analysis of the hands in egocentric vision: A survey
Egocentric vision (a.k.a. first-person vision - FPV) applications have
thrived over the past few years, thanks to the availability of affordable
wearable cameras and large annotated datasets. The position of the wearable
camera (usually mounted on the head) allows recording exactly what the camera
wearers have in front of them, in particular hands and manipulated objects.
This intrinsic advantage enables the study of the hands from multiple
perspectives: localizing hands and their parts within the images; understanding
what actions and activities the hands are involved in; and developing
human-computer interfaces that rely on hand gestures. In this survey, we review
the literature that focuses on the hands using egocentric vision, categorizing
the existing approaches into: localization (where are the hands or parts of
them?); interpretation (what are the hands doing?); and application (e.g.,
systems that use egocentric hand cues for solving a specific problem).
Moreover, a list of the most prominent datasets with hand-based annotations is
provided
An Effective and Efficient Method for Detecting Hands in Egocentric Videos for Rehabilitation Applications
Objective: Individuals with spinal cord injury (SCI) report upper limb
function as their top recovery priority. To accurately represent the true
impact of new interventions on patient function and independence, evaluation
should occur in a natural setting. Wearable cameras can be used to monitor hand
function at home, using computer vision to automatically analyze the resulting
videos (egocentric video). A key step in this process, hand detection, is
difficult to do robustly and reliably, hindering deployment of a complete
monitoring system in the home and community. We propose an accurate and
efficient hand detection method that uses a simple combination of existing
detection and tracking algorithms. Methods: Detection, tracking, and
combination methods were evaluated on a new hand detection dataset, consisting
of 167,622 frames of egocentric videos collected on 17 individuals with SCI
performing activities of daily living in a home simulation laboratory. Results:
The F1-scores for the best detector and tracker alone (SSD and Median Flow)
were 0.90 ± 0.07 and 0.42 ± 0.18, respectively. The best combination
method, in which a detector was used to initialize and reset a tracker,
resulted in an F1-score of 0.87 ± 0.07 while being two times faster than the
fastest detector alone. Conclusion: The combination of the fastest detector and
best tracker improved the accuracy over online trackers while improving the
speed of detectors. Significance: The method proposed here, in combination with
wearable cameras, will help clinicians directly measure hand function in a
patient's daily life at home, enabling independence after SCI.
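The combination strategy this abstract describes, a detector used to initialize and periodically reset a faster tracker, can be sketched roughly as follows. The function interfaces, the reset interval, and the failure-handling logic are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the detector-plus-tracker combination described in the
# abstract above: run an expensive detector sparsely to initialize and reset
# a cheap per-frame tracker. Interfaces and the reset interval are
# illustrative assumptions, not the authors' implementation.

def run_hybrid(frames, detect, make_tracker, reset_every=10):
    """detect(frame) -> box or None; make_tracker(frame, box) -> callable
    tracker(frame) -> box or None. Returns one box (or None) per frame."""
    boxes, tracker = [], None
    for i, frame in enumerate(frames):
        if tracker is None or i % reset_every == 0:
            box = detect(frame)                      # expensive, runs sparsely
            tracker = make_tracker(frame, box) if box else None
        else:
            box = tracker(frame)                     # cheap, runs every frame
            if box is None:                          # tracking failure: reset
                tracker = None
        boxes.append(box)
    return boxes
```

Resetting the tracker from fresh detections bounds tracker drift, which is one way a combination like this could beat online trackers on accuracy while staying faster than per-frame detection.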
Hand contour detection in wearable camera video using an adaptive histogram region of interest
BACKGROUND: Monitoring hand function at home is needed to better evaluate the effectiveness of rehabilitation interventions. Our objective is to develop wearable computer vision systems for hand function monitoring. The specific aim of this study is to develop an algorithm that can identify hand contours in video from a wearable camera that records the user's point of view, without the need for markers. METHODS: The two-step image processing approach for each frame consists of: (1) Detecting a hand in the image, and choosing one seed point that lies within the hand. This step is based on a priori models of skin colour. (2) Identifying the contour of the region containing the seed point. This is accomplished by adaptively determining, for each frame, the region within a colour histogram that corresponds to hand colours, and backprojecting the image using the reduced histogram. RESULTS: In four test videos relevant to activities of daily living, the hand detector classification accuracy was 88.3%. The contour detection results were compared to manually traced contours in 97 test frames, and the median F-score was 0.86. CONCLUSION: This algorithm will form the basis for a wearable computer-vision system that can monitor and log the interactions of the hand with its environment
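The backprojection step in the abstract above can be illustrated with a minimal NumPy sketch: score each pixel by the normalized count of its colour bin in a histogram built from known hand pixels. The hue-only histogram and the bin count are simplifying assumptions, not the paper's actual pipeline.

```python
import numpy as np

# Minimal NumPy illustration of histogram backprojection: each pixel's
# likelihood of being "hand" is the normalized count of its hue bin in a
# histogram of known hand pixels. Hue-only colour and 32 bins are
# simplifying assumptions, not the paper's actual pipeline.

def backproject(hue, hand_hist, n_bins=32):
    """hue: 2-D array of hue values in [0, 180); hand_hist: length-n_bins
    histogram of hand-pixel hues. Returns a per-pixel likelihood map."""
    bins = (hue.astype(int) * n_bins // 180).clip(0, n_bins - 1)
    hist = hand_hist / max(hand_hist.max(), 1e-9)    # normalize to [0, 1]
    return hist[bins]                                # look up each pixel's bin
```

Thresholding the resulting likelihood map and tracing the boundary of the connected region containing the seed point would yield the hand contour.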
A Fast EEG Forecasting Algorithm for Phase-Locked Transcranial Electrical Stimulation of the Human Brain
A growing body of research suggests that non-invasive electrical brain stimulation can more effectively modulate neural activity when phase-locked to the underlying brain rhythms. Transcranial alternating current stimulation (tACS) can potentially stimulate the brain in phase with its natural oscillations as recorded by electroencephalography (EEG), but matching these oscillations is a challenging problem due to the complex and time-varying nature of the EEG signals. Here we address this challenge by developing and testing a novel approach intended to deliver tACS phase-locked to the activity of the underlying brain region in real-time. This novel approach extracts phase and frequency from a segment of EEG, then forecasts the signal to control the stimulation. A careful tuning of the EEG segment length and prediction horizon is required and has been investigated here for different EEG frequency bands. The algorithm was tested on EEG data from 5 healthy volunteers. Algorithm performance was quantified in terms of phase-locking values across a variety of EEG frequency bands. Phase-locking performance was found to be consistent across individuals and recording locations. With current parameters, the algorithm performs best when tracking oscillations in the alpha band (8–13 Hz), with a phase-locking value of 0.77 ± 0.08. Performance was maximized when the frequency band of interest had a dominant frequency that was stable over time. The algorithm performs faster, and provides better phase-locked stimulation, compared to other recently published algorithms devised for this purpose. The algorithm is suitable for use in future studies of phase-locked tACS in preclinical and clinical applications
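The two ideas in this abstract, forecasting a short EEG segment from its dominant frequency and phase and scoring the result with a phase-locking value (PLV), can be sketched roughly as follows. The single-sinusoid FFT model and the choice of segment length and horizon are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Rough sketch of (1) forecasting a short EEG segment by extracting its
# dominant frequency and phase via an FFT and extrapolating a sinusoid, and
# (2) quantifying phase-locked stimulation with the PLV. The single-sinusoid
# model is an illustrative assumption, not the authors' algorithm.

def forecast_segment(segment, fs, horizon):
    """Fit the dominant sinusoid of `segment` (sampled at `fs` Hz) and
    extrapolate it `horizon` samples into the future."""
    n = len(segment)
    spectrum = np.fft.rfft(segment)
    k = np.argmax(np.abs(spectrum[1:])) + 1          # skip the DC bin
    freq = k * fs / n                                # dominant frequency (Hz)
    amp = 2 * np.abs(spectrum[k]) / n
    phase = np.angle(spectrum[k])
    t = np.arange(n, n + horizon) / fs               # future time points
    return amp * np.cos(2 * np.pi * freq * t + phase)

def phase_locking_value(phase_a, phase_b):
    """PLV = |mean of exp(i * phase difference)|: 1 means perfect locking."""
    diff = np.asarray(phase_a) - np.asarray(phase_b)
    return np.abs(np.mean(np.exp(1j * diff)))
```

In a real-time loop, the forecast over the prediction horizon would drive the tACS waveform while the next EEG segment is being acquired, which is why the tuning of segment length versus horizon matters.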
Measuring hand use in the home after cervical spinal cord injury using egocentric video
Background: Egocentric video has recently emerged as a potential solution for
monitoring hand function in individuals living with tetraplegia in the
community, especially for its ability to detect functional use in the home
environment. Objective: To develop and validate a wearable vision-based system
for measuring hand use in the home among individuals living with tetraplegia.
Methods: Several deep learning algorithms for detecting functional hand-object
interactions were developed and compared. The most accurate algorithm was used
to extract measures of hand function from 65 hours of unscripted video recorded
at home by 20 participants with tetraplegia. These measures were: the
percentage of interaction time over total recording time (Perc); the average
duration of individual interactions (Dur); the number of interactions per hour
(Num). To demonstrate the clinical validity of the technology, egocentric
measures were correlated with validated clinical assessments of hand function
and independence (Graded Redefined Assessment of Strength, Sensibility and
Prehension - GRASSP, Upper Extremity Motor Score - UEMS, and Spinal Cord
Independence Measure - SCIM). Results: Hand-object interactions were
automatically detected with a median F1-score of 0.80 (0.67-0.87). Our results
demonstrated that higher UEMS and better prehension were related to greater
time spent interacting, whereas higher SCIM and better hand sensation were
associated with a higher number of interactions performed during the egocentric video
recordings. Conclusions: For the first time, measures of hand function
automatically estimated in an unconstrained environment in individuals with
tetraplegia have been validated against internationally accepted measures of
hand function. Future work will require a formal evaluation of the
reliability and responsiveness of the egocentric-based performance measures for
hand use
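The three egocentric measures defined above (Perc, Dur, Num) can be computed from per-frame interaction labels along these lines; the boolean input format and the frame rate are assumptions about how a detector's output would be summarized, not the study's code.

```python
# Sketch of the three egocentric hand-use measures defined above, computed
# from per-frame interaction labels. The boolean input format and the frame
# rate are assumptions about how a detector's output would be summarized.

def hand_use_measures(labels, fps):
    """labels: per-frame booleans (True = hand-object interaction).
    Returns (Perc in %, mean Dur in seconds, Num per hour)."""
    total_s = len(labels) / fps
    bouts, run = [], 0
    for flag in labels:                 # group consecutive True frames
        if flag:                        # into interaction bouts
            run += 1
        elif run:
            bouts.append(run)
            run = 0
    if run:
        bouts.append(run)
    interact_s = sum(bouts) / fps
    perc = 100.0 * interact_s / total_s
    dur = interact_s / len(bouts) if bouts else 0.0
    num = len(bouts) / (total_s / 3600.0)
    return perc, dur, num
```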
Rehabilitation technologies and interventions for individuals with spinal cord injury: translational potential of current trends
In the past, neurorehabilitation for individuals with neurological damage, such as spinal cord injury (SCI), was focused on learning compensatory movements to regain function. Presently, the focus of neurorehabilitation has shifted to functional neurorecovery, or the restoration of function through repetitive movement training of the affected limbs. Technologies, such as robotic devices and electrical stimulation, are being developed to facilitate repetitive motor training; however, their implementation into mainstream clinical practice has not been realized. In this commentary, we examined how current SCI rehabilitation research aligns with the potential for clinical implementation. We completed an environmental scan of studies in progress that investigate a physical intervention promoting functional neurorecovery. We identified emerging interventions among the SCI population, and evaluated the strengths and gaps of the current direction of SCI rehabilitation research. Seventy-three study postings were retrieved through website and database searching. Study objectives, outcome measures, participant characteristics and the mode(s) of intervention being studied were extracted from the postings. The FAME (Feasibility, Appropriateness, Meaningfulness, Effectiveness, Economic Evidence) Framework was used to evaluate the strengths and gaps of the research with respect to likelihood of clinical implementation. Strengths included aspects of Feasibility, as the research was practical, aspects of Appropriateness as the research aligned with current scientific literature on motor learning, and Effectiveness, as all trials aimed to evaluate the effect of an intervention on a clinical outcome. Aspects of Feasibility were also identified as a gap; with two thirds of the studies examining emerging technologies, the likelihood of successful clinical implementation was questionable.
As the interventions being studied may not align with the preferences of clinicians and priorities of patients, the Appropriateness of these interventions for the current health care environment was questioned. Meaningfulness and Economic Evidence were also identified as gaps since few studies included measures reflecting the perceptions of the participants or economic factors, respectively. The identified gaps will likely impede the clinical uptake of many of the interventions currently being studied. Future research may lessen these gaps through a staged approach to the consideration of the FAME elements as novel interventions and technologies are developed, evaluated and implemented
Functional motor preservation below the level of injury in subjects with American Spinal Injury Association Impairment Scale grade A spinal cord injuries
OBJECTIVE: To assess how frequently subjects with spinal cord injuries (SCIs) classified as American Spinal Injury Association Impairment Scale (AIS) grade A have substantial preserved motor function below the neurologic level of injury, despite having no preserved sensory or motor function at the S4-5 spinal cord segment. DESIGN: Analysis of the European Multicenter Study about Spinal Cord Injury database to determine how frequently subjects assessed as AIS A would have been AIS D based on motor scores alone (ie, had scores of ≥3 in at least half of the International Standards for Neurological Classification of Spinal Cord Injury [ISNCSCI] key muscles below the neurologic level of injury, despite having no sacral sparing). SETTING: Eighteen European centers. PARTICIPANTS: Individuals with traumatic SCI at any level (total of 2557 assessments). INTERVENTIONS: Not applicable. MAIN OUTCOME MEASURE: ISNCSCI assessments. RESULTS: Over the first year after SCI (with assessments at approximately 1, 4, 12, 24, and 48wk) and for all rostrocaudal levels of injury, only 3.2% of AIS A assessments were found to meet the AIS D motor score criteria. The percentage was highest for lumbar (16.3%) and lower thoracic (4.4%) SCI. No trends were observed across time points. CONCLUSIONS: These results suggest that the low frequency of individuals with an AIS A classification and high levels of motor function is not a significant concern in subject recruitment for clinical trials, unless the level of SCI is within the lumbar cord
Compensation Strategies for Bioelectric Signal Changes in Chronic Selective Nerve Cuff Recordings: A Simulation Study
Peripheral nerve interfaces (PNIs) allow us to extract motor, sensory, and autonomic information from the nervous system and use it as control signals in neuroprosthetic and neuromodulation applications. Recent efforts have aimed to improve the recording selectivity of PNIs, including by using spatiotemporal patterns from multi-contact nerve cuff electrodes as input to a convolutional neural network (CNN). Before such a methodology can be translated to humans, its performance in chronic implantation scenarios must be evaluated. In this simulation study, approaches were evaluated for maintaining selective recording performance in the presence of two chronic implantation challenges: the growth of encapsulation tissue and rotation of the nerve cuff electrode. Performance over time was examined in three conditions: training the CNN at baseline only, supervised re-training with explicitly labeled data at periodic intervals, and a semi-supervised self-learning approach. This study demonstrated that a selective recording algorithm trained at baseline will likely fail over time due to changes in signal characteristics resulting from the chronic challenges. Results further showed that periodically recalibrating the selective recording algorithm could maintain its performance over time, and that a self-learning approach has the potential to reduce the frequency of recalibration
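The semi-supervised self-learning condition mentioned above can be sketched generically as self-training on confident pseudo-labels. The fit/predict_proba model interface and the confidence threshold are assumptions for illustration, not the study's actual recalibration code.

```python
# Generic sketch of self-learning recalibration: periodically retrain on the
# model's own confident predictions (pseudo-labels) instead of explicitly
# labeled data. The fit/predict_proba interface and the 0.9 confidence
# threshold are assumptions, not the study's implementation.

def self_learning_update(model, unlabeled, threshold=0.9):
    """Pseudo-label confident samples and retrain; returns how many were used."""
    probs = model.predict_proba(unlabeled)           # per-class confidences
    confident = [i for i, p in enumerate(probs) if max(p) >= threshold]
    if confident:
        x = [unlabeled[i] for i in confident]
        y = [max(range(len(probs[i])), key=probs[i].__getitem__)  # argmax class
             for i in confident]
        model.fit(x, y)                              # recalibrate on pseudo-labels
    return len(confident)
```

Restricting retraining to high-confidence samples is what would let such a scheme track gradual signal drift (encapsulation tissue growth, cuff rotation) while reducing how often explicit recalibration sessions are needed.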